QNAP’s pioneering QuAI platform powers a healthcare AI diagnostic system that significantly improves diagnostic accuracy and efficiency!

  • Background: From information to knowledge

    For the past 14 years, QNAP has focused on creating better storage solutions and redefining Network-attached Storage (NAS) through constant product innovation. QNAP has always been clear on one point: no matter how fast science and technology develop, the core value resides in the data itself. QNAP believes that the data value chain starts with data storage and then extends to data analysis and, finally, knowledge intelligence. It is precisely this original vision that has led QNAP on its journey of innovation. In 2018, QNAP launched the QuAI artificial intelligence software development platform, which brings intelligent features to high-capacity NAS systems and allows data scientists and developers to create, train, and optimize AI models on their QNAP NAS.

    While services and applications built on AI technologies hold unlimited potential waiting to be explored, the demands on AI computing environments have increased substantially along with technological advancements. QuAI was developed to make AI more accessible, so that more people can enjoy its benefits. Deploying AI services on the NAS significantly reduces the complexity of data storage and network configuration. With high-performance computing and powerful features, the NAS is now capable of deep learning. Expensive AI workstations can be replaced with a group of NAS units that can be deployed fairly quickly. Not only can the NAS serve as a Training Server for AI models, it can also serve as an Inference Server, further lowering the R&D entry barrier and making AI development more feasible and accessible.

    However, simply having QuAI and cost-effective NAS products is not enough for real-world AI applications. First, data scientists at QNAP had to identify the problems associated with commercial applications, isolate suitable data and methods (such as models and algorithms), and then employ AI to solve those commercial problems. QNAP needed an actual case study of AI applied in a vertical industry. In January 2018, QNAP’s AI Promotion Team was formed. The vision of this five-person team is to solve major industrial challenges with AI technology and to improve people’s lives through technology. It is this passion that drove the QNAP QuAI Architecture Team (hereinafter the QuAI team) to focus their attention on medical technology.

  • AI case study: the greatest crossover between data scientists and doctors

    In February 2018, the QuAI team visited a renowned medical research institute in Taiwan and met with several specialists and medical professors to discuss the feasibility of deploying AI in medicine. According to Dennis Chang, the convener of the QuAI team, “First, we needed to choose a battlefield. The three key factors to successful deep learning application are data, algorithms, and computing capability. Among them, the analyzability of data is the most fundamental.” The data Dennis referred to is the imagery commonly used by medical diagnostic teams, such as computed tomography (CT), magnetic resonance imaging (MRI), optical coherence tomography (OCT), and other files generated through non-invasive imaging technologies. These images show the characteristics of lesions, which doctors use to form a diagnosis based on their experience. However, many of these characteristics can be extremely subtle, making the interpretation of the images immensely difficult, and the entire diagnostic process depends on the doctor’s training and personal experience. If QNAP can enhance the efficiency and accuracy of image interpretation through AI technology, the quality of healthcare can be greatly improved.

    Besides requiring a large number of medical images for analysis, the QuAI team also needed to tap the doctors’ professional knowledge to label and categorize the medical data, gradually build an image interpretation model, and perform experimental analyses with deep neural network algorithms. This process is repeated to continuously improve and optimize the model architecture and algorithm until a diagnostic model for medical images is established. With a team of leading Taiwanese doctors helping to label the medical images, the QuAI team was very confident of completing this AI project. After rigorous discussions between both parties, a focus area was finally chosen: an Intelligent Medical Auxiliary/Diagnostic System for Age-related Maculopathy.

The “cancer of the eye”: Age-related Maculopathy

Age-related maculopathy is the degeneration of the central part of the retina that develops with age. The macula is a small area at the center of the retina that is responsible for central vision. Once the macula degenerates, central vision becomes blurred while peripheral vision remains unaffected, mainly affecting reading and other tasks performed close to the eye. Macular lesions generally occur in people over 55, and an average of 15% of the elderly population is affected by age-related maculopathy. The lesions are usually bilateral and cause irreversible damage to vision: patients may lose sight in one eye by the age of 65, and some lose their sight in both eyes by 70.

In recent years, the average age of age-related maculopathy patients appears to be dropping, with people who use mobile phones intensively at higher risk. The problem with this disease is that it has no obvious symptoms in its early stages, so most patients only realize they need medical attention at the mid and late stages. If discovered early, treatment can significantly slow the progression of the disease. Age-related maculopathy can be effectively detected through Optical Coherence Tomography (OCT): the patient’s retinal image is first obtained through retinal OCT, and the image is then interpreted to determine whether the lesion is due to diabetic macular edema, a primary macular hole, or age-related maculopathy. Appropriate treatment is then administered based on the diagnosis. In general, there are two types of maculopathy: wet and dry. There is currently no cure for dry maculopathy; patients are asked to go for regular follow-ups, wear sunglasses, and take lutein. Wet maculopathy, on the other hand, is the deterioration of vision due to abnormal proliferation of blood vessels, which results in hemorrhage, exudates, and fluid buildup. Unlike dry maculopathy, wet maculopathy can be improved through treatment.

The resolution of OCT images can be as high as 2 to 5 microns, and OCT can provide high-resolution transverse and cross-sectional 3D images. When examining the macular area with OCT, a highly accurate image of the macula can be obtained within 3 seconds of the scan. The structures of the lesions can be clearly seen without the need for pupil dilation or the use of a fluorescent dye for fluorescein angiography. However, interpreting the resulting images poses another challenge. The medical team pointed out: “Diagnosing macular lesions through medical imaging is a time-consuming and labor-intensive task. Not only do ophthalmologists need to undergo professional training, the interpretation of the images requires repeated review and discussion. In rural areas or regions with limited medical resources, local examiners serve as the front-line diagnostic personnel, but they may not have the diagnostic ability or confidence to make the right diagnosis. Furthermore, patients often have to wait a few weeks for the results of the interpretation, losing valuable treatment time.” Dennis believes that it may be possible to significantly shorten the OCT image interpretation time with the help of AI technology. AI can help doctors make accurate diagnoses, allowing patients to receive treatment earlier and reducing the risk of losing their sight.

  • It all begins with storage: Storage solution for medical data

    The first challenge of this project was to find suitable storage for the massive quantity of OCT images, and it had to be storage that complies with the associated requirements. Electronic protected health information (ePHI) is highly confidential. Under the requirements of the medical industry, protected health information may need to be stored over a long period of time (the statutory retention period is seven years), the data must be completely backed up, and data storage should comply with HIPAA (Health Insurance Portability and Accountability Act) stipulations. Not only does the high level of security provided by QNAP NAS comply with HIPAA specifications, but in January 2018 QNAP also announced plans to further integrate the Orthanc software suite. Orthanc is specially designed for the medical and healthcare industry: a lightweight, professional medical imaging server that transforms the QNAP NAS into a Picture Archiving and Communication System (PACS) and significantly enhances the medical image processing workflow. The analysis of medical images can be greatly simplified by simply storing all Digital Imaging and Communications in Medicine (DICOM) data on the NAS and making use of the advanced web-based DICOM viewer.
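To make the PACS workflow above concrete, here is a minimal sketch of how DICOM files could be pushed to and queried from an Orthanc server over its standard REST API. The NAS address, port, credentials, and file name are hypothetical placeholders, and the script assumes the Python requests library; it illustrates Orthanc's documented /instances and /tools/find endpoints rather than QNAP's actual integration.

```python
"""Minimal sketch of talking to an Orthanc PACS over its REST API.

Assumptions (not from the article): the Orthanc instance on the NAS is
reachable at ORTHANC_URL with HTTP basic authentication on Orthanc's
default port 8042. Adjust host, port and credentials to your deployment.
"""
import requests

ORTHANC_URL = "http://nas.example.local:8042"   # hypothetical NAS address
AUTH = ("orthanc", "orthanc")                   # hypothetical credentials


def upload_dicom(path: str) -> str:
    """Upload one DICOM file; Orthanc returns the ID it assigned."""
    with open(path, "rb") as f:
        resp = requests.post(f"{ORTHANC_URL}/instances", data=f.read(), auth=AUTH)
    resp.raise_for_status()
    return resp.json()["ID"]


def find_studies_by_patient(patient_id: str) -> list:
    """Look up studies by the PatientID DICOM tag via /tools/find."""
    query = {"Level": "Study", "Query": {"PatientID": patient_id}}
    resp = requests.post(f"{ORTHANC_URL}/tools/find", json=query, auth=AUTH)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    instance_id = upload_dicom("oct_scan.dcm")   # hypothetical file name
    print("Stored instance:", instance_id)
    print("Studies for patient 0001:", find_studies_by_patient("0001"))
```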

    As the QNAP NAS supports HIPAA compliance, various medical institutions were able to send large amounts of data from their own NAS units to a central TS-1685 in a manner compliant with security specifications. Using the QNAP NAS’s 10GbE high-speed connectivity, the QuAI team collected the medical images in a fairly short period of time. At the same time, Dennis configured the QNAP Hybrid Backup Sync software to back up all the files; if the content of a file is accidentally deleted or modified, the Snapshot function can quickly restore it. After ensuring that the data was completely secure, the team proceeded to generate the DICOM files, analyze the DICOM tags, and write them into a database for storage. The Orthanc web interface allows doctors to send DICOM images to the NAS by dragging and dropping, and to search for their patients’ images through tags. In addition, each case can be quickly examined and analyzed using the DICOM image viewer.
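The "analyze the DICOM tags and write them into a database" step could look roughly like the sketch below. It assumes pydicom for parsing and a local SQLite file as the tag index; the article does not say which parser or database the team actually used, so treat the folder name, database file, and tag selection as illustrative placeholders.

```python
"""Sketch of extracting DICOM tags and indexing them in a small database."""
import sqlite3
from pathlib import Path

import pydicom

DB_PATH = "dicom_index.sqlite"      # hypothetical database file
DICOM_DIR = Path("incoming_dicom")  # hypothetical folder of received files

conn = sqlite3.connect(DB_PATH)
conn.execute(
    """CREATE TABLE IF NOT EXISTS images (
           sop_instance_uid TEXT PRIMARY KEY,
           patient_id       TEXT,
           study_date       TEXT,
           modality         TEXT,
           file_path        TEXT)"""
)

for path in DICOM_DIR.glob("*.dcm"):
    # Read header tags only; pixel data is not needed for indexing.
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    conn.execute(
        "INSERT OR REPLACE INTO images VALUES (?, ?, ?, ?, ?)",
        (
            str(ds.SOPInstanceUID),
            str(ds.get("PatientID", "")),
            str(ds.get("StudyDate", "")),
            str(ds.get("Modality", "")),
            str(path),
        ),
    )

conn.commit()
conn.close()
```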

    Using the QNAP NAS as the DICOM server benefits hospitals in two ways: it saves costs on IT equipment, and it lets on-call doctors connect to Orthanc from their mobile devices and view the medical images sent to the NAS through the Orthanc mobile app.

Learning phase: Implementing QuAI and JupyterHub

The QuAI team chose the TS-1685, which supports twelve 3.5-inch hard disks and four 2.5-inch SSDs. It is powered by a high-end Intel® Xeon® D processor and allows an NVIDIA graphics card to be installed for accelerated performance, ensuring that the TS-1685 has sufficient computing power for AI workloads. QTS, the operating system of the QNAP NAS, features GPU passthrough for allocating a graphics card to AI applications. The performance of the TS-1685 is therefore comparable to a professional AI workstation, yet with a lower total cost of ownership (TCO). Compared with the complex billing plans of public cloud AI solutions, storing terabytes of data on the large-capacity TS-1685 is more economically viable.

The QuAI team used JupyterHub, which QNAP has adapted to run on its NAS, to develop the AI algorithms. Jupyter is open-source software that works with interpreted languages, making it easy to write algorithms and execute commands step by step. It allows easy visualization and documentation of data, facilitates collaborative programming within the team, and is widely used by data scientists.

In this project, the QuAI team collected tens of thousands of OCT medical images and collaborated with six specialist doctors, who spent one month labeling the images according to the four commonly encountered macular degeneration conditions. The team preprocessed the images to help the TS-1685 learn image recognition more quickly, including mirroring images, eliminating invalid images, and resizing them, as sketched below. Having acquired a large quantity of high-quality data, the team then embarked on constructing the deep neural network.
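A minimal sketch of this preprocessing might look like the following, assuming Pillow for image handling; the folder names and the 224x224 target size are illustrative assumptions rather than the team's actual settings.

```python
"""Drop unreadable files, mirror images to augment the data, and resize."""
from pathlib import Path

from PIL import Image, ImageOps, UnidentifiedImageError

SRC_DIR = Path("oct_raw")           # hypothetical folder of exported OCT images
DST_DIR = Path("oct_preprocessed")  # hypothetical output folder
TARGET_SIZE = (224, 224)            # assumed network input size

DST_DIR.mkdir(exist_ok=True)

for path in SRC_DIR.glob("*.png"):
    try:
        img = Image.open(path).convert("L")   # grayscale OCT slice
    except (UnidentifiedImageError, OSError):
        continue  # eliminate invalid or corrupted images

    resized = img.resize(TARGET_SIZE)
    resized.save(DST_DIR / path.name)

    # Horizontal mirroring doubles the sample count without changing the label.
    ImageOps.mirror(resized).save(DST_DIR / f"mirror_{path.name}")
```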

The QuAI team divided the samples between a Training Dataset and a Validation Dataset in a predefined proportion, imported them into a TS-1685 running the QuAI software development platform, and put the model through repeated cycles of training and validation. The entire process covered more than 100 experimental configurations, during which the optimizer, learning objectives, and other parameters were constantly adjusted for optimal performance. Once these experiments were complete, a preliminary AI model was ready on the TS-1685, and held-out data was then used as a Test Dataset to measure the model’s accuracy.
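For illustration, one such training run could be sketched as below with TensorFlow/Keras. The 80/20 split, the small CNN backbone, the optimizer settings, and the directory layout are all assumptions; the article describes only the overall train/validate/test flow, not the actual network the QuAI team used.

```python
"""Illustrative training loop for a four-class OCT classifier."""
import tensorflow as tf

IMG_SIZE = (224, 224)
BATCH = 32
DATA_DIR = "oct_preprocessed"   # hypothetical folder, one subfolder per class

# 80/20 train/validation split drawn from the same labeled pool.
train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=BATCH, color_mode="grayscale")
val_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=BATCH, color_mode="grayscale")

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (1,)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),  # four maculopathy classes
])

# The optimizer and learning objective are fixed up front; the repeated
# "experimental configurations" in the text correspond to re-running this
# script with different hyperparameters.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(train_ds, validation_data=val_ds, epochs=20)
model.save("oct_model.keras")
```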

The entire process of training and testing took only two months. The resulting AI model achieves a 95% accuracy rate, far higher than manual interpretation, and the TS-1685 needs less than 100 milliseconds to classify an OCT image. The completed AI model can easily be deployed on multiple QNAP NAS units, each serving as an Inference Server, so that different hospitals can use the Age-related Maculopathy AI Diagnostic System concurrently. The NAS units at different organizations and institutions thus gain access to the central intelligence and, in turn, can serve a wider population.
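As a rough idea of what an Inference Server on a NAS could look like, the sketch below wraps the trained model in a small FastAPI endpoint. The framework choice, model path, class names, and file name are hypothetical; the article states only that the NAS answers inference requests, not how the endpoint is implemented.

```python
"""Minimal inference endpoint serving the trained OCT classifier."""
import io

import numpy as np
import tensorflow as tf
from fastapi import FastAPI, File, UploadFile
from PIL import Image

CLASS_NAMES = ["normal", "dry_amd", "wet_amd", "other"]  # hypothetical labels
model = tf.keras.models.load_model("oct_model.keras")

app = FastAPI()


@app.post("/predict")
async def predict(file: UploadFile = File(...)):
    """Accept one OCT image and return the predicted class and confidence."""
    raw = await file.read()
    img = Image.open(io.BytesIO(raw)).convert("L").resize((224, 224))
    batch = np.expand_dims(np.array(img, dtype=np.float32), axis=(0, -1))
    probs = model.predict(batch, verbose=0)[0]
    top = int(np.argmax(probs))
    return {"label": CLASS_NAMES[top], "confidence": float(probs[top])}

# Assuming this file is saved as inference_server.py, run with:
#   uvicorn inference_server:app --host 0.0.0.0 --port 8000
```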

  • Vision: To build a medical care and diagnostic assistance system for rural areas

    The QuAI team delivered outstanding results in just five months. The AI Diagnostic System helps doctors make faster and more accurate diagnoses of age-related macular degeneration, and its low cost and ease of installation mean it can be readily deployed in rural or remote regions with limited medical resources. The elderly in rural areas need only complete the OCT imaging, and a diagnosis can be made immediately, allowing further treatment to be arranged in time when needed. Speaking on behalf of the QuAI team, Dennis said, “We are delighted to see the fruits of our research benefiting people’s health. The future development of AI must proceed in close collaboration with all industries, so that more AI applications can be developed to meet actual needs. All industries should also explore the possibilities of incorporating AI at different levels. What we should consider is not the kinds of jobs that AI might replace, but rather how AI can help humans accomplish more work and create more precise economies of scale.” The QuAI team will move on to explore more medical applications and to develop high-precision AI-assisted medical services, including NGS gene sequencing, brain tumor detection, tumor analysis, and radiotherapy. It is our hope that, with efforts from both the people and the government, Taiwan can become a global leader in AI in the near future.
